

Four things to know about China's new AI rules in 2024

MIT Technology Review

Some of those people are policymakers, who have been trying hard to respond to the problems AI products pose without reducing our ability to harness their power. So at the beginning of this year, my colleagues and I looked around the world for signs of how AI regulations are likely to change this year. We summarized what we found here. In China, one of the major moves to be on the lookout for in 2024 is whether the country will follow in the European Union's footsteps and announce its own comprehensive AI Act. In June of last year, China's top governing body released a list of legislation it was working on.


How China's New AI Rules Could Affect U.S. Companies

TIME - Tech

Soon after China's artificial intelligence rules came into effect last month, a series of new AI chatbots began trickling onto the market, with government approval. The rules have already been watered down from what was initially proposed, and so far, China hasn't enforced them as strictly as it could, experts say. China's regulatory approach will likely have huge implications for the technological competition between the country and its AI superpower rival, the U.S. The Cyberspace Administration of China's (CAC) Generative AI Measures, which came into effect on Aug. 15, are some of the strictest in the world. They state that generative AI services should not generate content "inciting subversion of national sovereignty or the overturn of the socialist system," or "advocating terrorism or extremism, promoting ethnic hatred and ethnic discrimination, violence and obscenity, as well as fake and harmful information." Preventing AI chatbots from spewing out unwanted or even toxic content has been a challenge for AI developers around the world.


OpenAI Could Quit Europe Over New AI Rules, CEO Sam Altman Warns

TIME - Tech

OpenAI CEO Sam Altman said Wednesday his company could "cease operating" in the European Union if it is unable to comply with the provisions of new artificial intelligence legislation that the bloc is currently preparing. "We're gonna try to comply," Altman said on the sidelines of a panel discussion at University College London, part of an ongoing tour of European countries. He said he had met with E.U. regulators to discuss the AI act as part of his tour, and added that OpenAI had "a lot" of criticisms of the way the act is currently worded. Altman said that OpenAI's skepticism centered on the E.U. law's designation of "high risk" systems as it is currently drafted. The law is still undergoing revisions, but under its current wording it may require large AI models like OpenAI's ChatGPT and GPT-4 to be designated as "high risk," forcing the companies behind them to comply with additional safety requirements.


Trust in EU approach to artificial intelligence risks being undermined by new AI rules

#artificialintelligence

The EU is winning the battle for trust among artificial intelligence (AI) researchers, academics on both sides of the Atlantic say, bolstering the Commission's ambitions to set global standards for the technology. But some fear the EU risks squandering this confidence by imposing ill-thought-through rules in its recently proposed Artificial Intelligence Act, which some academics say are at odds with the realities of AI research. "We do see a push for trustworthy and transparent AI also in the US, but, in terms of governance, we are not as far [ahead] as the EU in this regard," said Bart Selman, president of the Association for the Advancement of Artificial Intelligence (AAAI) and a professor at Cornell University. AI researchers, a highly international group, are "aware that AI developments in the US are dominated by business interests, and in China by the government interest," said Holger Hoos, professor of machine learning at Leiden University and a founder of the Confederation of Laboratories for Artificial Intelligence Research in Europe (CLAIRE). EU policymaking, though slower, incorporated "more voices, and more perspectives" than the more centralised processes in the US and China, he argued, with the EU having taken strong action on privacy through the General Data Protection Regulation, which came into effect in 2018.


Europe contemplates new rules for AI – and what this might mean in A/NZ

#artificialintelligence

At the beginning of 2021, the European Commission will propose legislation on AI that will be, in the first instance, horizontal (as opposed to sectoral) and risk-based, with mandatory requirements for high-risk AI applications. The new rules will aim at ensuring transparency, accountability and consumer protection, including safety, through robust AI governance and data quality requirements. Europe's approach to regulating technology is based on the precautionary principle, which enables rapid regulatory intervention in the face of possible danger to human, animal or plant health, or to protect the environment. This perspective has helped Europe become a global leader in shaping the digital technology market. In particular, with the introduction of the General Data Protection Regulation (GDPR) in 2018, Europe considers it has gained a competitive advantage through the creation of a trust mark for increased privacy protection. Australia and New Zealand have historically had a close relationship with the European Union (EU) and its member countries.


EU's new AI rules will focus on ethics and transparency

#artificialintelligence

The European Union is set to release new regulations for artificial intelligence that are expected to focus on transparency and oversight as the region seeks to differentiate its approach from those of the United States and China. On Wednesday, EU technology chief Margrethe Vestager will unveil a wide-ranging plan designed to bolster the region's competitiveness. While transformative technologies such as AI have been labeled critical to economic survival, Europe is perceived as slipping behind the U.S., where development is being led by tech giants with deep pockets, and China, where the central government is leading the push. Europe has in recent years sought to emphasize fairness and ethics when it comes to tech policy. Such AI systems would require human oversight and audits, according to a widely leaked draft of the new rules.